[D] Yann LeCun's recent recommendations : MachineLearning
Maybe LLMs aren't all that great at it yet, but why can't they be thinking? They're producing output that looks like the result of thinking. One thing is that the result you're talking about doesn't really correspond to what the LLM "thought", if it can even be called that. Very simplified explanation from someone who is definitely not an expert: you feed it tokens and you get back a single token, like "the", right?
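To make that "tokens in, one token out" loop concrete, here's a toy sketch. This is not how a real LLM works internally (a real model is a neural network producing a probability distribution over a large vocabulary); the hard-coded bigram table and the function names are purely illustrative assumptions, just to show that generated text is a sequence of one-token-at-a-time predictions:

```python
# Hypothetical toy "model": a hard-coded table mapping the last token
# to a probability distribution over the next token. A real LLM computes
# such a distribution with a neural network over the whole context.
BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_token(tokens):
    """Greedy decoding: return the most probable next token, or None."""
    dist = BIGRAM_PROBS.get(tokens[-1], {})
    if not dist:
        return None
    return max(dist, key=dist.get)

def generate(prompt, max_tokens=5):
    """Feed tokens in, get one token back, append it, repeat."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok is None:
            break
        tokens.append(tok)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

The point of the loop is that the model never emits a paragraph of "thought" in one step; each output token is sampled or picked from a distribution conditioned on everything so far, and the loop feeds it back in.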